perm filename HOMONC.ULU[RDG,DBL] blob sn#519844 filedate 1980-08-22 generic text, type C, neo UTF8
	What is Expertise?
What does a Nobel Laureate have which separates him from the rest
of the populace?
Answering "expertise in his field" only begs the question
-- what exactly is expertise?
It must begin by including a tremendous amount of knowledge (whatever that is).
But raw data, by itself, is clearly insufficient.
(Otherwise we would be forced to conclude every
advanced text is an expert.)
The information must be in a usable form; and the expert must know how to use it
for the task at hand.
This necessitates knowledge describing when each bit of knowledge is applicable;
and at a higher level, when to use a particular sub-model over another.
A bona fide expert must have this type of information, in abundance,
to navigate within the space of solution methodologies.

In the last decade numerous AI tools and methods have evolved
to deal with these issues.
The entire rule formalism may be viewed as a large part of this attempt;
before that, predicate calculus, and theories of semantics apparently had
similar goals.
However, each technique seems applicable only to its particular domain
(such as infectious diseases), and only
for its specific type of task (e.g., diagnosis).
Attempts to use the same domain data for another task, say teaching, have often
exposed the gaps and limitations of this cut at the data.

To reiterate, expertise must contain (1) a wealth of raw information and
(2) procedures which dictate how to utilize this information,
and when to use each part.
Furthermore, experience has shown that the overall routines, used for a particular
application, are quite dependent on the actual type of task;
this dependency seems to have a major impact on the nature of the stored data
itself.

The apparent conclusion is that expertise must be relative to both the
domain, and the particular type of task.
This is not surprising -- ingenious clinicians may be very poor at deciding which
experiment to try next; and it is the exception, not the rule, that
great experimenters are also good teachers.

A major difficulty encountered in designing expert systems is the incredibly
ill-defined nature of this task.
We do not have even a toehold on
(much less a comprehensive theory of) what is meant by expertise.
This paper will attempt to address the requirements we place on any system
(human or machine) before we will claim it is an expert; and then comment by
outlining a system, Homonculus, which should be capable of building up such
"experts", being itself an expert in that `expert-construction' task.
	Definitions
Let me begin by defining some terms.

Expertise:
Given some task, <Y>, an expert in domain <X> is able to build/access
a working model of <X> applicable to <Y>, and use it effectively.

<X> might be Medicine, Chemistry, Molecular Genetics, ...
<Y> might be Diagnosis, Experiment Planning, Teaching, Conjecturing, Data Input ...

Working Model:
A set of facts, together with rules pertaining to applicability.
(Like RWW's LS-pairs)
Capable of simulating the `real world' - e.g., Symbolic Execution [modulo the Frame Problem]
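The definition above can be made concrete: a working model is a set of facts plus rules that carry their own applicability tests, stepped forward by symbolic execution. A minimal sketch, assuming nothing beyond the definition (all class and rule names here are illustrative, not from the source):

```python
# A "working model": facts plus rules with explicit applicability tests,
# in the spirit of RWW's LS-pairs. Illustrative sketch only.

class Rule:
    def __init__(self, name, applicable, apply):
        self.name = name
        self.applicable = applicable  # predicate over the fact set
        self.apply = apply            # fact set -> set of derived facts

class WorkingModel:
    def __init__(self, facts, rules):
        self.facts = set(facts)
        self.rules = rules

    def step(self):
        """One round of symbolic execution: fire every applicable rule."""
        new = set()
        for r in self.rules:
            if r.applicable(self.facts):
                new |= r.apply(self.facts) - self.facts
        self.facts |= new
        return new  # empty set means quiescence (modulo the frame problem)

# A toy causal rule, to show the shape of the machinery:
model = WorkingModel(
    facts={("infected", "patient1")},
    rules=[Rule("infection-causes-fever",
                lambda f: any(p == "infected" for p, _ in f),
                lambda f: {("fever", x) for p, x in f if p == "infected"})])
model.step()
```

After one step the model has derived the fever fact; a second step fires no new facts, which is the sense in which simulation "bottoms out".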

Model:

Theory:

Simulation Structure:
	Theories, Working Models, and the Like

Assumption: The structure of the representation depends mostly on the domain.
However, what fills it in (e.g., grain size) is a function of the task.
Similarly, the basic calling sequence of the various routines is essentially
invariant across tasks.
However, these actual functions will vary with each application.

[Is this the inference engine vs data arrangement?]

The structure of the representations may have to change from domain to domain, but 
the actual data filling out this structure is highly dependent on the task;
there is a similar need to adjust the accessing/utilizing routines.

Even this is insufficient: where are things like creativity? How does one decide
the new directions in which to explore? For this one has to develop a "feel",
a gestalt, for the full domain. This meta-theory of science, ...

In this light, let us readdress questions which philosophers have already considered.
What is a Theory? How does it relate to a model? When is a model applicable?
What constitutes effective
predictive power? When are metaphors, and analogies, pertinent?

What are possible uses of a model? -- Prediction, throwing out "obviously wrong"
conjectures [Gelernter], quickly conjecturing reasonable ideas, simulation.

Meta-issues: When is a proposed model adequate? Can a model encode its own
shortcomings?
What does it take to validate (test) a theory (or model)? When is it totally
discredited, and when can it be "hacked up" to account for a new phenomenon?
[See Velikovsky.]
How does one decide what counts as reinforcement?
When does it have real explanatory capability? [How vs Why]

Remaining issues
Conjecture (eg Circumscription) vs Laws
Cause/Effect ← Frame problem
Different perspectives of a theory -- or distinct models of some fact.

What is a theory?
Copernicus, Darwin, <plate tectonics>; Kuhn ← self-descriptive
Methodologies, Esthetics,
Scenario:
User:	I want a KB which knows about Programming Constructs.
Homon:	For what task?
User:	Program verification.
Homon:	Tell me about Program Verification.
User:	It tries to determine whether executing a particular listing of source code
	will achieve a specified goal.
Homon:	Oh, so this task is essentially validation.
User:	Yes.
Homon:	Describe the input which you are trying to validate.
User:	Source code, in the language LISP.
Homon:	Describe the expected results. (Is there a function which ...)
User:	Yes.
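The exchange above reads as a fixed acquisition protocol: ask for the domain and task, classify the task onto a known task type, then ask that type's follow-up questions. A hedged sketch of such a loop (the question script, the `classify_task` keywords, and the taxonomy are all invented here for illustration):

```python
# Hypothetical sketch of the data-acquisition dialogue behind the scenario.
# The script and task taxonomy below are illustrative assumptions.

ACQUISITION_SCRIPT = {
    "validation": ["Describe the input which you are trying to validate.",
                   "Describe the expected results."],
    "diagnosis":  ["Describe the observable symptoms.",
                   "Describe the underlying causal sub-model."],
}

def classify_task(description):
    """Crudely map a free-text task description onto a known task type."""
    if "whether" in description or "verif" in description:
        return "validation"
    if "diagnos" in description:
        return "diagnosis"
    return None

def acquire(ask):
    """Run the dialogue; `ask` maps a question to the user's answer."""
    kb = {"domain": ask("I want a KB which knows about what?"),
          "task": ask("For what task?")}
    kb["task-type"] = classify_task(ask("Tell me about %s." % kb["task"]))
    kb["answers"] = [ask(q) for q in ACQUISITION_SCRIPT.get(kb["task-type"], [])]
    return kb

# Re-run the scenario above with canned answers:
answers = iter(["Programming Constructs", "Program verification",
                "It tries to determine whether executing a particular listing "
                "of source code will achieve a specified goal.",
                "Source code, in the language LISP.",
                "Yes."])
kb = acquire(lambda q: next(answers))
```

The point of the sketch is only the control structure: the same loop, with a different script, would conduct the diagnosis or teaching dialogues.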

Relevant Parts:	Causal sub-model
		Symptoms/cures
		Fuzziness


Note Homonculus has its own expertise - in knowledge extraction/input, and knowledge
application.
I.e., it has its own scenario which it follows to extract this data.

This is due to Homonculus's Data Acquisition Frame.
	Homonculus
Large set of tasks - 
	Types of Tasks
<In all - causality, meta-planning, ... >

Teaching, diagnosing, assisting, conjecturing, validation, ...

Hierarchy - so under teaching is: Cut&Dry, Overlay, Differential, ...

Relevant parts for Teaching:
	Derivation of student model
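The hierarchy of task types, with "relevant parts" attached at each level, can be sketched as a small table plus an inheritance walk. The entries below are taken from the notes above where they exist; the dictionary shape and function name are illustrative assumptions:

```python
# Illustrative sketch of the extensible task-type hierarchy. Each task type
# names its parent and the parts of a domain model it considers relevant.
TASK_TYPES = {
    "teaching":     {"parent": None, "relevant": ["student model"]},
    "cut&dry":      {"parent": "teaching", "relevant": []},
    "overlay":      {"parent": "teaching", "relevant": []},
    "differential": {"parent": "teaching", "relevant": []},
    "diagnosing":   {"parent": None, "relevant": ["causal sub-model",
                                                  "symptoms/cures",
                                                  "fuzziness"]},
}

def relevant_parts(task):
    """Collect relevant parts by walking up the chain to the root task type."""
    parts = []
    while task is not None:
        parts = TASK_TYPES[task]["relevant"] + parts
        task = TASK_TYPES[task]["parent"]
    return parts
```

Extensibility then amounts to adding an entry: a new teaching sub-style inherits "student model" for free.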

Minimal level understanding of numerous domains
Note RLL just handles Data, and its organization. This system would be capable
of dealing with the whole range of things we call expertise:
	Data and its organization
	Utilization of this information
	Means for converting to other tasks
Homonculus has an extensible collection of task-types,
as well.

If P knows the domain of chemistry,
	and the ideas of instruction,
  THEN P can teach chemistry.

If P knows the domain of infectious diseases,
	and the ideas of diagnosis,
  THEN P can diagnose infectious diseases.

If P knows the domain of theories/models,
	and the ideas of knowledge acquisition,
  THEN P can acquire (input in usable form) theories/models.
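The three rules share one schema: knowledge of a domain <X>, plus the ideas of a task <Y>, yields competence at <Y> in <X>. A hypothetical rendering of that schema (knowledge is reduced to bare set membership purely for illustration):

```python
# The shared schema of the three rules above, as one parameterized rule.
# Modeling "knows" as set membership is an illustrative simplification.

def can_perform(agent, domain, task):
    """P can do task <Y> in domain <X> iff P knows both <X> and the ideas of <Y>."""
    return domain in agent["domains"] and task in agent["tasks"]

P = {"domains": {"chemistry", "infectious diseases"},
     "tasks": {"instruction", "diagnosis"}}

can_perform(P, "chemistry", "instruction")          # P can teach chemistry
can_perform(P, "infectious diseases", "diagnosis")  # P can diagnose them
```

Homonculus's own case is the same schema applied to itself: domain = theories/models, task = knowledge acquisition.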

Not AGE's decision tree.
Not EMYCIN.
Not RWW's MetaFol - but still uses his Simulation Structures.
Not Schank's scripts - as extensible, and of arbitrary depth before bottoming out.
Not Psi's program generation (although this is closest). It works with higher
level input, and is less guided -- more about expert programs only.
(It could have a Program Generation Model, which executes.)